
    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), focussing on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from the outside rather than taking it apart (pedagogical versus decompositional explanations), which may dodge developers' worries about intellectual property or trade secret disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may at best be distracting, and at worst may nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
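
    A minimal sketch of the “pedagogical”, subject-centric style of explanation the abstract refers to: rather than decomposing the opaque model, the black box is queried around one individual's record and a simple surrogate is fitted whose weights a person could inspect. The model, features, sampling scheme and parameters below are illustrative assumptions, not the authors' method.

    ```python
    # Hedged illustration only: a local, query-centred ("subject-centric") surrogate
    # explanation, learned by querying the black box from the outside rather than
    # taking it apart. All names and parameters here are assumptions for the sketch.
    import numpy as np
    from sklearn.linear_model import Ridge

    def opaque_model(X):
        """Stand-in for an inscrutable scorer (e.g. a credit-risk model)."""
        return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1])))

    def explain_locally(model, query, n_samples=500, scale=0.3, seed=0):
        """Fit a weighted linear surrogate around `query`; return per-feature weights."""
        rng = np.random.default_rng(seed)
        # Sample a neighbourhood around the data subject's record and label it with the model.
        neighbourhood = query + rng.normal(0.0, scale, size=(n_samples, query.shape[0]))
        scores = model(neighbourhood)
        # Down-weight distant samples so the surrogate stays faithful near the query.
        proximity = np.exp(-np.sum((neighbourhood - query) ** 2, axis=1) / (2 * scale ** 2))
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(neighbourhood, scores, sample_weight=proximity)
        return surrogate.coef_

    query_point = np.array([0.8, -0.2])  # one individual's (illustrative) record
    print(explain_locally(opaque_model, query_point))  # which features drive the score near this person
    ```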

    Algorithms that Remember: Model Inversion Attacks and Data Protection Law

    Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature on “model inversion” and “membership inference” attacks, which indicates that the process of turning training data into machine learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
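
    A minimal sketch of the kind of confidence-based membership inference test the abstract alludes to, illustrating why turning training data into a model is not necessarily one-way: an overfitted model tends to be more confident about records it was trained on, so confidence can leak membership. The dataset, model and threshold below are illustrative assumptions, not the attacks surveyed in the paper.

    ```python
    # Hedged illustration only: a naive confidence-threshold membership inference test.
    # The data, model and threshold are assumptions for the sketch, not the specific
    # attacks described in the information security literature the paper surveys.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, y_train = X[:1000], y[:1000]   # records the model was trained on
    X_out, y_out = X[1000:], y[1000:]       # records the model never saw

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    def looks_like_member(clf, record, true_label, threshold=0.95):
        """Guess membership from the model's confidence in the record's true label."""
        confidence = clf.predict_proba(record.reshape(1, -1))[0, true_label]
        return confidence >= threshold

    train_rate = np.mean([looks_like_member(model, r, t) for r, t in zip(X_train, y_train)])
    out_rate = np.mean([looks_like_member(model, r, t) for r, t in zip(X_out, y_out)])
    print(f"flagged as members: training set {train_rate:.2f} vs held-out set {out_rate:.2f}")
    ```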

    Privacy, security and data protection in smart cities: a critical EU law perspective

    "Smart cities" are a buzzword of the moment. Although legal interest is growing, most academic responses at least in the EU, are still from the technological, urban studies, environmental and sociological rather than legal, sectors and have primarily laid emphasis on the social, urban, policing and environmental benefits of smart cities, rather than their challenges, in often a rather uncritical fashion . However a growing backlash from the privacy and surveillance sectors warns of the potential threat to personal privacy posed by smart cities . A key issue is the lack of opportunity in an ambient or smart city environment for the giving of meaningful consent to processing of personal data; other crucial issues include the degree to which smart cities collect private data from inevitable public interactions, the "privatisation" of ownership of both infrastructure and data, the repurposing of “big data” drawn from IoT in smart cities and the storage of that data in the Cloud. This paper, drawing on author engagement with smart city development in Glasgow as well as the results of an international conference in the area curated by the author, argues that smart cities combine the three greatest current threats to personal privacy, with which regulation has so far failed to deal effectively; the Internet of Things(IoT) or "ubiquitous computing"; "Big Data" ; and the Cloud. It seeks solutions both from legal institutions such as data protection law and from "code", proposing in particular from the ethos of Privacy by Design, a new "social impact assessment" and new human:computer interactions to promote user autonomy in ambient environments

    Coding Privacy

    Lawrence Lessig famously and usefully argues that cyberspace is regulated not just by law but also by norms, markets and architecture or code. His insightful work might also lead the unwary to conclude, however, that code is inherently anti-privacy, and thus that an increasingly digital world must also be increasingly devoid of privacy. This paper argues briefly that since technology is a neutral tool, code can be designed as much to fight for privacy as against it, and that what matters now is to look at what incentivizes the creation of pro- rather than anti-privacy code in the mainstream digital world. This paper also espouses the idea that privacy is better built in from scratch as a feature or default rather than a bug (the idea of privacy by design) than, as is more common at present, bolted on via after-the-fact privacy-enhancing technologies or PETs. Existing examples of privacy-invasive and privacy-supportive code, drawn from the worlds of social networking, spam and copyright protection, are used to show how privacy may be pushed as a brand or feature rather than a cost or bug.

    Privacy in public spaces: what expectations of privacy do we have in social media intelligence?

    In this paper we give an introduction to the transition in contemporary surveillance from traditional top-down police surveillance to profiling and “pre-crime” methods. We then review in more detail the rise of open source intelligence (OSINT) and social media intelligence (SOCMINT) and their use by law enforcement and security authorities. Following this we consider what privacy protection, if any, is currently given in UK law to SOCMINT. Given the largely negative answer to that question, we analyse what reasonable expectations of privacy there may be for users of public social media, with reference to existing case law on Article 8 of the ECHR. Two factors in particular are argued to support a reasonable expectation of privacy in open public social media communications: first, the failure of many social network users to perceive the environment where they communicate as “public”; and second, the impact of search engines (and other automated analytics) on traditional conceptions of structured dossiers as most problematic for state surveillance. Lastly, we conclude that existing law does not provide adequate protection for open SOCMINT and that this will become increasingly significant as more and more personal data is disclosed and collected in public without well-defined expectations of privacy.

    “I spy, with my little sensor”: Fair data handling practices for robots between privacy, copyright and security

    The paper suggests an amendment to Principle 4 of ethical robot design and a demand for "transparency by design". It argues that while misleading vulnerable users as to the nature of a robot is a serious ethical issue, other forms of intentionally deceptive or unintentionally misleading aspects of robotic design pose challenges that are, on the one hand, more universal and harmful in their application and, on the other, more difficult to address consistently through design choices. The focus will be on transparent design regarding the sensory capacities of robots. Intuitive, low-tech but highly efficient privacy-preserving behaviour regularly depends on an accurate understanding of surveillance risks. Design choices that hide, camouflage or misrepresent these capacities can undermine such strategies. However, formulating an ethical principle of "sensor transparency" is not straightforward, as openness can also lead to greater vulnerability and, with that, security risks. We argue that the discussion of sensor transparency needs to be embedded in a broader discussion of "fair data handling principles" for robots, which involve issues of privacy but also intellectual property rights such as copyright.

    Protecting Post-Mortem Privacy: Reconsidering the Privacy Interests of the Deceased in a Digital World

    Post-mortem privacy is not a recognised term of art or institutional category in general succession law or even in the privacy literature. It may be defined as the right of a person to preserve and control what becomes of his or her reputation, dignity, integrity, secrets or memory after death. While of established concern in disciplines such as psychology, counselling and anthropology, this notion has until now received relatively little attention in law, especially the common law. We argue that the new circumstances of the digital world, and in particular the emergence of a new and voluminous array of “digital assets” created, hosted and shared on web 2.0 intermediary platforms, and often revealing highly personal or intimate personal data, require a revisiting of this stance. An analysis of comparative common and civilian law institutions, focusing on personality rights, defamation, moral rights and freedom of testation, confirms that there is little support for post-mortem privacy in the common law; and while personality rights in general have greater traction in civilian law, including their survival after death, the primary role taken by contract regulation may still mean that users of US-based intermediary platforms, wherever those users are located, are deprived of post-mortem privacy rights. Having established a crucial gap in online legal privacy protection, we suggest that future protection may need to come from legislation, contract or “code” solutions, of which the first to emerge onto the market is Google Inactive Account Manager.

    Taking the “Personal” Out of Personal Data: Durant v FSA and its Impact on the Legal Regulation of CCTV

    What is "personal data" as defined by European and UK data protection legislation? The article considers how the scope of “personal data” has been narrowed in the UK at least by the controversial Court of Appeal decision in Durant v FSA . Although the case itself is about disclosure of information in the financial services sector, somewhat unpredictably the main impact of Durant has been in what at first blush seems to be a remotely connected area, that of the field of legal regulation of closed circuit TV cameras (CCTV)